Transferring multiple objects between bins is a common task in many applications. In robotics, the standard approach is to pick up one object and transfer it at a time. However, grasping and picking up multiple objects at once and transferring them together is more efficient. This paper presents a set of novel strategies for efficiently grasping multiple objects in a bin in order to transfer them to another bin. The strategies enable a robotic hand to identify the best ready-to-grasp hand configuration (pre-grasp) and to compute a flexion synergy based on the desired number of objects to be grasped. The paper also presents an approach that uses a Markov decision process (MDP) to model the pick-transfer routine when the required number of objects exceeds the capacity of a single grasp. Using the MDP model, the proposed approach can produce an optimal pick-transfer routine that minimizes the number of transfers, which reflects efficiency. The proposed approach has been evaluated in both a simulation environment and on a real robotic system. The results show that, compared with an optimal single-object pick-and-transfer solution, the proposed approach reduces the number of transfers by 59% and the number of lifts by 58%.
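To make the MDP formulation concrete, a minimal sketch is shown below: the state is the number of objects still to be transferred, an action is the intended grasp size, and value iteration minimizes the expected number of transfers. The grasp-outcome probabilities and the solver itself are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of modeling the pick-transfer routine as an MDP solved with
# value iteration. The grasp-outcome distribution below is hypothetical.
import numpy as np

def solve_pick_transfer_mdp(n_objects, grasp_outcomes, gamma=1.0, iters=200):
    """grasp_outcomes[a] maps an intended grasp size a to {picked_count: probability}.
    Returns the expected number of transfers from each state and the best grasp size."""
    V = np.zeros(n_objects + 1)              # V[s]: expected transfers left with s objects remaining
    policy = np.zeros(n_objects + 1, dtype=int)
    for _ in range(iters):
        for s in range(1, n_objects + 1):
            best_cost, best_a = np.inf, 1
            for a, outcomes in grasp_outcomes.items():
                # one transfer now, plus expected future transfers
                cost = 1.0 + gamma * sum(p * V[max(s - k, 0)] for k, p in outcomes.items())
                if cost < best_cost:
                    best_cost, best_a = cost, a
            V[s], policy[s] = best_cost, best_a
    return V, policy

# Hypothetical outcome model: intending to grasp a objects may pick a, a-1, or a+1.
grasp_outcomes = {
    1: {1: 1.0},
    2: {1: 0.2, 2: 0.7, 3: 0.1},
    3: {2: 0.3, 3: 0.6, 4: 0.1},
}
V, policy = solve_pick_transfer_mdp(10, grasp_outcomes)
print("expected transfers for 10 objects:", V[10], "first grasp size:", policy[10])
```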
The human hand can grasp a desired number of objects from a pile based solely on tactile sensing. To do so, a robot needs to grasp within the pile, sense the number of objects in the grasp before lifting, and predict how many objects will remain in the grasp after lifting. This is a challenging problem because, at the moment the prediction must be made, the robotic hand is still inside the pile and the objects in the grasp are not observable by a vision system. Moreover, some objects grasped by the hand before lifting may drop out of the hand as it is lifted, because they were supported by other objects in the pile rather than by the fingers. Hence, the robotic hand should use its tactile sensors to sense the number of grasped objects before lifting. This paper presents novel multi-object grasping analysis methods for solving this problem, including a grasp-volume calculation, a tactile force analysis, and a data-driven deep-learning approach. The methods have been implemented on a Barrett hand and then evaluated in simulation and on a real robotic setup. The evaluation results conclude that, once the Barrett hand has grasped multiple objects, the data-driven model can predict, before lifting, the number of objects that will remain in the hand after lifting. The root-mean-square errors of our approach are 0.74 and 0.58 for simulated cubes and spheres, respectively, and 1.06 and 1.45 for spheres and cubes on the real system.
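As a rough illustration of the data-driven component, the sketch below maps pre-lift tactile and joint readings to a predicted count of objects that will remain after lifting, trained with a mean-squared-error objective and evaluated by RMSE. The 24-dimensional input layout and network sizes are placeholder assumptions, not the paper's exact design.

```python
# A minimal sketch of a regressor from pre-lift tactile/joint readings to the
# number of objects expected to remain in the grasp after lifting.
import torch
import torch.nn as nn

class GraspCountRegressor(nn.Module):
    def __init__(self, n_features=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),            # predicted object count (a real number)
        )

    def forward(self, tactile_features):
        return self.net(tactile_features).squeeze(-1)

model = GraspCountRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# x: pre-lift tactile/joint readings, y: objects remaining after lift (from labeled trials)
x = torch.randn(256, 24)                 # placeholder batch
y = torch.randint(0, 5, (256,)).float()
for _ in range(10):                      # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
rmse = torch.sqrt(loss_fn(model(x), y)).item()   # the paper reports RMSE per object type
```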
Most publicly available datasets for image classification are single-labeled, while images in our daily life are inherently multi-labeled. This annotation gap makes many pre-trained single-label classification models fail in practical scenarios. The annotation issue is even more pronounced for aerial images: aerial data collected from sensors naturally cover a relatively large land area with multiple labels, whereas widely available annotated aerial datasets (e.g., UCM, AID) are single-labeled. Since manually annotating multi-label aerial images would be time- and labor-consuming, we propose a novel self-correction integrated domain adaptation (SCIDA) method for automatic multi-label learning. SCIDA is weakly supervised: it automatically learns a multi-label image classification model from massive, publicly available single-label images. To achieve this, we propose a novel Label-Wise self-Correction (LWC) module to better explore underlying label correlations. This module also makes unsupervised domain adaptation (UDA) from single-label to multi-label data possible. For model training, the proposed model uses only single-label information and requires no prior knowledge of multi-label data; it predicts labels for multi-label aerial images. In our experiments, trained with the single-labeled MAI-AID-S and MAI-UCM-S datasets, the proposed model is tested directly on our collected multi-scene aerial image (MAI) dataset.
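One way to picture the label-wise self-correction idea is sketched below: per-label scores are refined through a learned label-correlation matrix so that co-occurring labels can reinforce each other. This is only an illustrative interpretation of the LWC module, not its actual architecture, and the label count is assumed.

```python
# A rough sketch of correlation-aware correction of per-label predictions.
import torch
import torch.nn as nn

class LabelWiseSelfCorrection(nn.Module):
    def __init__(self, num_labels):
        super().__init__()
        # learnable label-to-label correlation, initialized to identity
        self.correlation = nn.Parameter(torch.eye(num_labels))

    def forward(self, logits):
        probs = torch.sigmoid(logits)                           # independent per-label scores
        corrected = probs @ torch.softmax(self.correlation, dim=-1)
        return corrected                                        # correlation-aware multi-label scores

lwc = LabelWiseSelfCorrection(num_labels=17)                    # 17 scene labels is an assumption
scores = lwc(torch.randn(8, 17))                                # batch of backbone logits
```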
Semi-supervised learning methods are highly valued for image semantic segmentation tasks due to the scarcity of quality annotations in the medical imaging community. In this paper, an advanced consistency-aware, pseudo-label-based self-ensembling approach is presented to fully exploit the power of Vision Transformers (ViT) and convolutional neural networks (CNN). Our proposed framework consists of a feature-learning module, in which a ViT and a CNN mutually enhance each other, and a guidance module tailored for consistency awareness. Pseudo labels are inferred and utilized repeatedly and separately by the CNN and ViT views in the feature-learning module to expand the dataset, and the two views are mutually beneficial. Meanwhile, a perturbation scheme is designed for the feature-learning module, and averaged network weights are employed to develop the guidance module. By doing so, the framework combines the feature-learning strengths of CNNs and ViTs, strengthens performance through dual-view co-training, and enables consistency-aware supervision in a semi-supervised manner. A topological exploration of all alternative supervision modes between the CNN and the ViT is detailed and validated, demonstrating the most promising performance and the specific settings of our method on semi-supervised medical image segmentation tasks. Experimental results show that the proposed method achieves state-of-the-art performance on a public benchmark dataset across a variety of metrics. The code is publicly available.
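A condensed sketch of the dual-view co-training step is given below: each branch (a CNN and a ViT segmenter) produces pseudo labels on unlabeled images that supervise the other branch, in addition to the usual supervised loss on labeled images. The perturbation scheme and the averaged-weight guidance module are omitted, and the loss weighting is an assumption.

```python
# A simplified cross pseudo-label co-training step between a CNN and a ViT view.
import torch
import torch.nn.functional as F

def co_training_step(cnn, vit, labeled_img, label, unlabeled_img, lambda_u=0.5):
    # supervised loss on the labeled image for both views
    sup = F.cross_entropy(cnn(labeled_img), label) + F.cross_entropy(vit(labeled_img), label)

    # cross pseudo supervision on the unlabeled image
    with torch.no_grad():
        pseudo_from_cnn = cnn(unlabeled_img).argmax(dim=1)   # (B, H, W) hard pseudo labels
        pseudo_from_vit = vit(unlabeled_img).argmax(dim=1)
    unsup = (F.cross_entropy(vit(unlabeled_img), pseudo_from_cnn) +
             F.cross_entropy(cnn(unlabeled_img), pseudo_from_vit))

    return sup + lambda_u * unsup
```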
In this work, we study unsupervised domain adaptation (UDA) in a challenging self-supervised manner. One of the difficulties is how to learn task discrimination in the absence of target labels. Unlike previous literature that directly aligns cross-domain distributions or leverages reverse gradients, we propose Domain Confused Contrastive Learning (DCCL) to bridge the source and target domains via domain puzzles and to retain discriminative representations after adaptation. Technically, DCCL searches for the most domain-challenging direction and exquisitely crafts domain-confused augmentations as positive pairs, then it contrastively encourages the model to pull representations towards the other domain, thereby learning more stable and effective domain-invariant representations. We also investigate whether contrastive learning necessarily helps UDA when other data augmentations are performed. Extensive experiments demonstrate that DCCL significantly outperforms baselines.
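A simplified sketch of contrastive learning with domain-confused positives follows: each source feature is paired with the feature of a domain-puzzled version of the same image, here approximated by mixing it with a target image, and an InfoNCE loss pulls the pair together. The mixing scheme is only an illustrative stand-in for the paper's domain puzzles.

```python
# A simplified InfoNCE objective over domain-confused positive pairs.
import torch
import torch.nn.functional as F

def domain_confused_views(src_imgs, tgt_imgs, alpha=0.3):
    # mix a bit of target appearance into each source image (hypothetical puzzle)
    return (1 - alpha) * src_imgs + alpha * tgt_imgs

def info_nce(anchor, positive, temperature=0.1):
    # anchor, positive: (B, D) feature vectors from the encoder
    anchor = F.normalize(anchor, dim=1)
    positive = F.normalize(positive, dim=1)
    logits = anchor @ positive.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)               # diagonal entries are the positives

def dccl_loss(encoder, src_imgs, tgt_imgs):
    confused = domain_confused_views(src_imgs, tgt_imgs)
    return info_nce(encoder(src_imgs), encoder(confused))
```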
Molecular dynamics (MD) simulation predicts the trajectories of atoms by solving Newton's equations of motion with a numerical integrator. Due to physical constraints, the integrator's time step must be small to maintain sufficient accuracy, which limits simulation efficiency. To address this, we introduce a graph-neural-network (GNN) based model, MDNet, to predict the evolution of coordinates and momenta with a large time step. Moreover, MDNet easily scales to larger systems because of its linear complexity with respect to system size. We demonstrate the performance of MDNet on a 4000-atom system with large time steps and show that MDNet predicts good equilibrium and transport properties, in good agreement with standard MD simulations.
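The sketch below conveys the basic idea: a message-passing network maps the current positions and momenta of an atomic system to the state one large time step later, replacing many small integrator steps. The architecture is deliberately minimal and much simpler than MDNet.

```python
# A bare-bones message-passing step that predicts positions and momenta after
# one large time step; illustrative only, not the MDNet architecture.
import torch
import torch.nn as nn

class LargeStepGNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(7, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.node_mlp = nn.Sequential(nn.Linear(hidden + 6, hidden), nn.ReLU(), nn.Linear(hidden, 6))

    def forward(self, pos, mom, edges):
        # pos, mom: (N, 3); edges: (E, 2) neighbor index pairs within a cutoff radius
        src, dst = edges[:, 0], edges[:, 1]
        rel = pos[dst] - pos[src]                                   # relative displacement
        dist = rel.norm(dim=1, keepdim=True)
        msg = self.edge_mlp(torch.cat([rel, mom[src], dist], dim=1))
        agg = torch.zeros(pos.size(0), msg.size(1), device=pos.device).index_add_(0, dst, msg)
        delta = self.node_mlp(torch.cat([agg, pos, mom], dim=1))
        return pos + delta[:, :3], mom + delta[:, 3:]               # state after one large step
```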
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
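As an illustration of the image-level part of the pipeline, the sketch below shifts a source image's per-channel statistics towards the target domain's global statistics before segmentation. The actual global photometric alignment module may differ; this only conveys the idea of aligning image-level properties across domains.

```python
# A simple sketch of image-level photometric alignment via channel statistics.
import torch

def photometric_align(src_img, tgt_mean, tgt_std, eps=1e-6):
    # src_img: (C, H, W); tgt_mean, tgt_std: (C,) global statistics of the target domain
    src_mean = src_img.mean(dim=(1, 2), keepdim=True)
    src_std = src_img.std(dim=(1, 2), keepdim=True)
    normalized = (src_img - src_mean) / (src_std + eps)
    return normalized * tgt_std.view(-1, 1, 1) + tgt_mean.view(-1, 1, 1)
```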
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
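A compact sketch of the style-aware adaptation mechanism is shown below: the style code predicts channel-wise modulation for a transformer feed-forward layer, so the same decoder produces differently styled facial motion. This is one plausible reading of "adjusting the weights of the feed-forward layers", not the exact design, and all dimensions are assumptions.

```python
# An illustrative feed-forward layer whose hidden units are modulated by a style code.
import torch
import torch.nn as nn

class StyleAwareFeedForward(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, d_style=128):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        self.style_to_scale = nn.Linear(d_style, d_ff)    # style code -> channel-wise gains

    def forward(self, x, style_code):
        # x: (B, T, d_model); style_code: (B, d_style)
        scale = torch.sigmoid(self.style_to_scale(style_code)).unsqueeze(1)  # (B, 1, d_ff)
        hidden = torch.relu(self.fc1(x)) * scale           # style modulates the hidden units
        return self.fc2(hidden)
```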
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, resulting in predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
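The second-stage objective can be sketched as follows: the visual encoder sees only the current frame, predicts the ego-motion to the next frame, and is trained with the photometric error between the next frame and the current frame re-projected using that motion and the frozen stage-one depth. The helper `reproject` stands in for a standard view-synthesis warp, and all names here are illustrative rather than PPGeo's actual API.

```python
# A high-level sketch of the stage-two photometric objective.
import torch
import torch.nn.functional as F

def stage2_loss(visual_encoder, pose_head, depth_net, frame_t, frame_t1, intrinsics, reproject):
    depth = depth_net(frame_t).detach()                    # stage-one depth, not updated here
    features = visual_encoder(frame_t)                     # policy representation from one frame
    pose = pose_head(features)                             # predicted future ego-motion (6-DoF)
    warped = reproject(frame_t1, depth, pose, intrinsics)  # synthesize frame_t from frame_t1
    return F.l1_loss(warped, frame_t)                      # photometric error drives the encoder
```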
Increasing research interest focuses on sequential recommender systems, aiming to model dynamic sequence representations precisely. However, the loss functions most commonly used in state-of-the-art sequential recommendation models have essential limitations. To name a few, the Bayesian Personalized Ranking (BPR) loss suffers from vanishing gradients caused by numerous negative samples and prediction biases; the Binary Cross-Entropy (BCE) loss is sensitive to the number of negative samples, so it is likely to ignore valuable negative examples and reduce training efficiency; the Cross-Entropy (CE) loss focuses only on the last timestamp of the training sequence, which causes low utilization of sequence information and results in inferior user sequence representations. To avoid these limitations, in this paper we propose calculating a Cumulative Cross-Entropy (CCE) loss over the sequence. CCE is simple and direct, enjoying the virtues of painless deployment, no negative sampling, and effective and efficient training. We conduct extensive experiments on five benchmark datasets to demonstrate the effectiveness and efficiency of CCE. The results show that employing the CCE loss on three state-of-the-art models, GRU4Rec, SASRec, and S3-Rec, reaches 125.63%, 69.90%, and 33.24% average improvement of full-ranking NDCG@5, respectively. Using CCE, the performance curve of the models on the test data rises rapidly with wall-clock time and is superior to that of other loss functions during almost the whole course of model training.
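A direct sketch of the cumulative cross-entropy idea is given below: full-softmax cross-entropy is accumulated over every timestamp of the training sequence rather than only the last one, with no negative sampling. Shapes and the padding convention are illustrative assumptions.

```python
# Cumulative cross-entropy over all positions of a training sequence.
import torch
import torch.nn.functional as F

def cumulative_cross_entropy(logits, targets, pad_id=0):
    # logits: (B, T, num_items) scores over the full item set at each step
    # targets: (B, T) the next item at each step, with pad_id marking padding
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,
        reduction="sum",
    )
    return loss / (targets != pad_id).sum().clamp(min=1)   # average over valid positions
```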